
    Multi-Agent Reinforcement Learning for Dynamic Ocean Monitoring by a Swarm of Buoys

    Autonomous marine environmental monitoring traditionally involves an area coverage problem that can only be carried out effectively by a multi-robot system. In this paper, we focus on robotic swarms, which are typically operated and controlled by means of simple swarming behaviors obtained from a subtle yet ad hoc combination of bio-inspired strategies. We propose a novel, structured approach to area coverage using multi-agent reinforcement learning (MARL) that effectively deals with the non-stationarity of environmental features. Specifically, we propose two dynamic area coverage approaches: (1) swarm-based MARL and (2) coverage-range-based MARL. The former is trained using the multi-agent deep deterministic policy gradient (MADDPG) approach, whereas a modified version of MADDPG with a reward function that intrinsically leads to collective behavior is introduced for the latter. Both methods are tested and validated on regions of different geometric shapes but equal surface area (square vs. rectangle), yielding acceptable area coverage and benefiting from the structured learning in non-stationary environments. Both approaches are advantageous compared to a naïve swarming method; however, coverage-range-based MARL outperforms swarm-based MARL, with stronger convergence in the learning criteria and wider spreading of agents for area coverage. Comment: Accepted for publication at IEEE/MTS OCEANS 202
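
    The collective reward function is only described at a high level above. As a purely illustrative sketch (not the paper's formulation), a shared coverage reward over a grid-discretized region, assuming a hypothetical fixed sensing radius per buoy, could look like this:

```python
import numpy as np

def coverage_reward(agent_positions, region_size=(50.0, 50.0),
                    sensing_radius=5.0, cell=1.0):
    """Fraction of a rectangular region covered by the union of the agents'
    sensing disks -- a hypothetical stand-in for the collective reward
    described in the abstract, not the authors' exact function."""
    xs = np.arange(0.0, region_size[0], cell) + cell / 2.0
    ys = np.arange(0.0, region_size[1], cell) + cell / 2.0
    gx, gy = np.meshgrid(xs, ys)                 # grid-cell centres
    covered = np.zeros_like(gx, dtype=bool)
    for px, py in agent_positions:
        covered |= (gx - px) ** 2 + (gy - py) ** 2 <= sensing_radius ** 2
    return covered.mean()                        # shared reward in [0, 1]

# Example: four buoys spread over a 50 m x 50 m square
print(coverage_reward([(10, 10), (10, 40), (40, 10), (40, 40)]))
```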

    Estimating the potential for shared autonomous scooters

    Recent technological developments have shown significant potential for transforming urban mobility. Considering first- and last-mile travel and short trips, the rapid adoption of dockless bike-share systems has shown the possibility of disruptive change while simultaneously presenting new challenges, such as fleet management and the use of public spaces. In this paper, we evaluate the operational characteristics of a new class of shared vehicles that is being actively developed in the industry: scooters with self-repositioning capabilities. We do this by adapting the methodology of shareability networks to a large-scale dataset of dockless bike-share usage, giving us estimates of ideal fleet size under varying assumptions about fleet operations. We show that self-repositioning capabilities can help achieve up to 10 times higher vehicle utilization than is possible in current bike-share systems. We also show that the actual benefits will depend strongly on the availability of dedicated infrastructure, a key issue for scooter and bicycle use. Based on our results, we envision that these technological advances present an opportunity to rethink urban infrastructure and how transportation can be effectively organized in cities.
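
    The shareability-network methodology itself is not reproduced here; a minimal sketch of the underlying idea, assuming trips given as (start_time, end_time, start_xy, end_xy) and a hypothetical fixed self-repositioning speed, greedily chains trips onto as few vehicles as possible to estimate fleet size:

```python
import math

def estimate_fleet_size(trips, reposition_speed=2.0):
    """Greedy estimate of the number of vehicles needed to serve all trips,
    assuming a vehicle can self-reposition between the end of one trip and
    the start of the next at `reposition_speed` (m/s). Times are in seconds.
    This is a simplified stand-in for the shareability-network method."""
    vehicles = []                      # each entry: (available_time, location)
    for start_t, end_t, start_xy, end_xy in sorted(trips):
        assigned = False
        for i, (avail_t, loc) in enumerate(vehicles):
            travel = math.dist(loc, start_xy) / reposition_speed
            if avail_t + travel <= start_t:            # vehicle can make it
                vehicles[i] = (end_t, end_xy)
                assigned = True
                break
        if not assigned:
            vehicles.append((end_t, end_xy))           # need a new vehicle
    return len(vehicles)

# Example: two trips that one scooter can chain, plus one overlapping trip
trips = [(0, 600, (0, 0), (500, 0)),
         (900, 1500, (600, 0), (1200, 0)),
         (100, 700, (2000, 0), (2500, 0))]
print(estimate_fleet_size(trips))      # -> 2
```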

    Evaluating Visual Odometry Methods for Autonomous Driving in Rain

    The increasing demand for autonomous vehicles has created a need for robust navigation systems that can also operate effectively in adverse weather conditions. Visual odometry is a technique used in these navigation systems, enabling the estimation of vehicle position and motion from onboard camera input. However, visual odometry accuracy can be significantly degraded in challenging weather conditions such as heavy rain, snow, or fog. In this paper, we evaluate a range of visual odometry methods, including our DROID-SLAM-based heuristic approach. Specifically, these algorithms are tested on both clear- and rainy-weather urban driving data to evaluate their robustness. We compiled a dataset comprising a range of rainy weather conditions from different cities, including the Oxford Robotcar dataset from Oxford, the 4Seasons dataset from Munich, and an internal dataset collected in Singapore. We evaluated the visual odometry algorithms for both monocular and stereo camera setups using the Absolute Trajectory Error (ATE). Our evaluation suggests that the Depth and Flow for Visual Odometry (DF-VO) algorithm with a monocular setup worked well for short distances (< 500 m), while our proposed DROID-SLAM-based heuristic approach for the stereo setup performed relatively well for long-term localization. Both algorithms performed consistently well across all rain conditions. Comment: 8 pages, 4 figures, accepted at the IEEE International Conference on Automation Science and Engineering (CASE) 202
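
    The Absolute Trajectory Error used for the comparison is a standard metric; a minimal sketch of one common way to compute it, assuming synchronized (N, 3) position arrays and rigid (rotation + translation) alignment, with scale alignment omitted, is shown below:

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """RMSE of position differences after rigid alignment of the estimated
    trajectory to ground truth. `gt` and `est` are (N, 3) arrays of
    synchronized positions; scale alignment (often needed for monocular VO)
    is omitted for brevity."""
    gt, est = np.asarray(gt, float), np.asarray(est, float)
    gt_c, est_c = gt - gt.mean(0), est - est.mean(0)      # centre both
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)              # Kabsch alignment
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                    # rotation est -> gt
    aligned = (R @ est_c.T).T + gt.mean(0)
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

# Example: a trajectory identical up to a rotation and offset gives ~0 ATE
t = np.linspace(0, 10, 50)
gt = np.stack([t, np.sin(t), np.zeros_like(t)], axis=1)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
print(absolute_trajectory_error(gt, (Rz @ gt.T).T + [1.0, -2.0, 0.0]))
```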

    Bimodal information analysis for emotion recognition

    We present an audio-visual information analysis system for automatic emotion recognition. We propose an approach for the analysis of video sequences that combines facial expressions observed visually with acoustic features to automatically recognize five universal emotion classes: Anger, Disgust, Happiness, Sadness, and Surprise. The visual component of our system evaluates facial expressions using a bank of 20 Gabor filters that spatially sample the images. The audio analysis is based on global statistics of voice pitch and intensity, along with temporal features such as speech rate and Mel-Frequency Cepstral Coefficients. We combine the two modalities at the feature and score levels to compare the respective joint emotion recognition rates. Emotions are classified instantaneously using a Support Vector Machine, and temporal inference is drawn from the scores produced by the classifier. The approach is validated on a posed audio-visual database and a natural interactive database to test the robustness of our algorithm. The experiments performed on these databases provide encouraging results, with the best combined recognition rate being 82%.
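
    The feature-level and score-level fusion schemes compared above can be illustrated generically; the following sketch uses scikit-learn SVMs and synthetic placeholder features, and is not the paper's exact configuration:

```python
import numpy as np
from sklearn.svm import SVC

def feature_level_fusion(X_visual, X_audio, y, X_visual_test, X_audio_test):
    """Concatenate visual and audio feature vectors and train a single SVM
    (generic feature-level fusion sketch)."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(np.hstack([X_visual, X_audio]), y)
    return clf.predict(np.hstack([X_visual_test, X_audio_test]))

def score_level_fusion(X_visual, X_audio, y, X_visual_test, X_audio_test):
    """Train one SVM per modality and average their class probabilities
    (generic score-level fusion sketch)."""
    clf_v = SVC(kernel="rbf", probability=True).fit(X_visual, y)
    clf_a = SVC(kernel="rbf", probability=True).fit(X_audio, y)
    scores = (clf_v.predict_proba(X_visual_test)
              + clf_a.predict_proba(X_audio_test)) / 2.0
    return clf_v.classes_[np.argmax(scores, axis=1)]

# Example with synthetic data: 100 samples, 5 emotion classes
rng = np.random.default_rng(0)
Xv, Xa = rng.normal(size=(100, 20)), rng.normal(size=(100, 12))
y = rng.integers(0, 5, size=100)
print(score_level_fusion(Xv[:80], Xa[:80], y[:80], Xv[80:], Xa[80:]))
```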

    Multi-Robot Exploration and Rendezvous on Graphs

    Abstract — We address the problem of arranging a meeting (or rendezvous) between two or more robots in an unknown bounded topological environment, starting at unknown locations and without any communication. The goal is to rendezvous in minimum time so that the robots can share resources for performing a global task. We specifically consider a global exploration task executed by two or more robots. Each robot explores the environment simultaneously for a specified time, then selects potential rendezvous locations, where it expects to find other robots, and visits them. We propose a ranking criterion for selecting the order in which potential rendezvous locations will be visited. This ranking criterion associates a cost with visiting a rendezvous location and an expected reward for finding other agents there. We evaluate the time taken to rendezvous while varying a set of conditions, including world size, number of robots, starting location of each robot, and the presence of sensor noise. We present simulation results that quantify the effect of these factors on the rendezvous time.
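
    The ranking criterion is described only in terms of a visit cost and an expected reward; one illustrative way to combine them (a weighted difference, which is an assumption rather than the paper's exact formula) is:

```python
def rank_rendezvous_locations(candidates, cost_weight=1.0):
    """Order candidate rendezvous locations by expected utility.
    Each candidate is a dict with 'cost' (e.g. path length to reach it) and
    'reward' (e.g. estimated chance of finding another robot there).
    The form reward - cost_weight * cost is an illustrative choice."""
    return sorted(candidates,
                  key=lambda c: c["reward"] - cost_weight * c["cost"],
                  reverse=True)

# Example: a nearby but unlikely spot vs. a distant but promising one
candidates = [{"name": "hall junction", "cost": 5.0,  "reward": 0.2},
              {"name": "main atrium",   "cost": 20.0, "reward": 0.9}]
print([c["name"] for c in rank_rendezvous_locations(candidates, cost_weight=0.01)])
```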

    Multi-agent rendezvous on street network

    Asymmetric Rendezvous Search at Sea (2014 Canadian Conference on Computer and Robot Vision)

    Abstract — In this paper we address the rendezvous problem between an autonomous underwater vehicle (AUV) and a passively floating drifter on the sea surface. The AUV’s mission is to maintain an estimate of the floating drifter’s position while exploring the underwater environment and periodically attempting to rendezvous with it. We are interested in the case where the AUV loses track of the drifter, predicts its location, and searches for it in the vicinity of the predicted location. We parameterize this search problem with respect to both the uncertainty in the drifter’s position estimate and the ratio between the drifter’s and the AUV’s speeds. We examine two search strategies for the AUV, an inward spiral and an outward spiral, derive conditions under which these patterns are guaranteed to find the drifter, and empirically analyze them with respect to different parameters in simulation. In addition, we present results from field trials in which an AUV successfully found a drifter after periods of communication loss during which the robot was exploring.
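
    The spiral search patterns can be sketched as simple waypoint generators; the parameterization below (loop spacing of one sensor diameter, outer radius set by the position uncertainty) is illustrative rather than the paper's exact one:

```python
import math

def outward_spiral_waypoints(center, sensor_range, max_radius, step=0.5):
    """Archimedean spiral waypoints expanding from the drifter's predicted
    position out to `max_radius` (the position-uncertainty bound), with
    successive loops spaced one sensor diameter apart so the sensed annuli
    tile the disk without gaps. An inward spiral is this list reversed."""
    cx, cy = center
    spacing = 2.0 * sensor_range          # radial gap between loops
    b = spacing / (2.0 * math.pi)         # r = b * theta
    waypoints, theta = [], 0.0
    while b * theta <= max_radius:
        r = b * theta
        waypoints.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
        theta += step
    return waypoints

# Example: search a 200 m uncertainty disk with a 10 m sensing range
path = outward_spiral_waypoints((0.0, 0.0), sensor_range=10.0, max_radius=200.0)
print(len(path), path[-1])
```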